Abstract: To utilize image feature information more efficiently and to improve both the accuracy and robustness of target tracking, an improved superpixel tracking algorithm that fuses salient region detection into the spatio-temporal context is proposed. First, superpixel segmentation is performed in the context region of the target; the motion relevance of the target context and the regional covariance information are then used to compute the correlation saliency of the image superpixels. Within a Bayesian framework, a model fusing saliency detection into the spatio-temporal context is built in the frequency domain. Next, the color and texture histograms of the current frame and the reference template are employed to calculate the Bhattacharyya coefficient and to update the spatio-temporal context model, and a scale pyramid model is introduced to estimate the target scale. Finally, an adaptive motion prediction module is incorporated by updating an online dynamic model sample set and using ridge regression to determine the parameters of a low-pass filter. Experimental results on public benchmark sequences indicate that the proposed algorithm outperforms competing algorithms under illumination change, complex background, object rotation, fast motion, and low resolution.
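The Bhattacharyya-coefficient test used in the model-update step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the update threshold of 0.8 are assumptions, and the histograms would in practice be color and texture histograms of the current frame and the reference template.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity of two histograms; normalized so identical histograms give 1.0."""
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(np.sqrt(p * q))

def should_update(hist_current, hist_template, threshold=0.8):
    """Update the context model only when the current-frame histogram
    is sufficiently similar to the reference template (threshold is an
    assumed value, not taken from the paper)."""
    return bhattacharyya_coefficient(hist_current, hist_template) >= threshold

# Two close histograms: coefficient is near 1, so the model would be updated.
hist_a = np.array([0.2, 0.3, 0.5])
hist_b = np.array([0.25, 0.25, 0.5])
print(should_update(hist_a, hist_b))  # True
```

A coefficient near 1 indicates the tracked region still resembles the template, so updating the spatio-temporal context model is safe; a low coefficient suggests occlusion or drift, in which case the update is skipped.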
[1] LEE J Y, YU W. Visual Tracking by Partition-Based Histogram Backprojection and Maximum Support Criteria // Proc of the IEEE International Conference on Robotics and Biomimetics. Washington, USA: IEEE, 2011: 2860-2865.
[2] VOJIR T, NOSKOVA J, MATAS J. Robust Scale-Adaptive Mean-Shift for Tracking. Pattern Recognition Letters, 2014, 49: 250-258.
[3] KRISTAN M, MATAS J, LEONARDIS A, et al. The Visual Object Tracking VOT2015 Challenge Results[C/OL]. [2016-10-25]. http://www.votchallenge.net/vot2015/download/vot_2015_paper.pdf.
[4] ROSS D A, LIM J, LIN R S, et al. Incremental Learning for Robust Visual Tracking. International Journal of Computer Vision, 2008, 77(1/2/3): 125-141.
[5] ZHANG K H, ZHANG L, YANG M H. Real-Time Compressive Tracking // Proc of the European Conference on Computer Vision. Berlin, Germany: Springer, 2012: 864-877.
[6] BAO C L, WU Y, LING H B, et al. Real Time Robust l1 Tracker Using Accelerated Proximal Gradient Approach // Proc of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2012: 1830-1837.
[7] DOLLÁR P, BELONGIE S J, PERONA P. The Fastest Pedestrian Detector in the West[C/OL]. [2016-12-25]. http://vision.ucsd.edu/sites/default/files/FPDW_0.pdf.
[8] DINH T B, VO N, MEDIONI G. Context Tracker: Exploring Supporters and Distracters in Unconstrained Environments // Proc of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2011: 1177-1184.
[9] BINH N D. Online Boosting-Based Object Tracking // Proc of the 12th International Conference on Advances in Mobile Computing and Multimedia. New York, USA: ACM, 2014: 194-202.
[10] HARE S, GOLODETZ S, SAFFARI A, et al. Struck: Structured Output Tracking with Kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 2096-2109.
[11] BABENKO B, YANG M H, BELONGIE S. Robust Object Tracking with Online Multiple Instance Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8): 1619-1632.
[12] NAM H, HAN B. Learning Multi-domain Convolutional Neural Networks for Visual Tracking // Proc of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2016: 4293-4302.
[13] DANELLJAN M, HÄGER G, KHAN F S, et al. Learning Spatially Regularized Correlation Filters for Visual Tracking // Proc of the IEEE International Conference on Computer Vision. Washington, USA: IEEE, 2015: 4310-4318.
[14] WANG N Y, LI S Y, GUPTA A, et al. Transferring Rich Feature Hierarchies for Robust Visual Tracking[J/OL]. [2016-10-25]. https://arxiv.org/pdf/1501.04587.pdf.
[15] GRABNER H, MATAS J, VAN GOOL L, et al. Tracking the Invisible: Learning Where the Object Might Be // Proc of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2010: 1285-1292.
[16] BAY H, TUYTELAARS T, VAN GOOL L. SURF: Speeded Up Robust Features // Proc of the 9th European Conference on Computer Vision. Berlin, Germany: Springer, 2006, I: 404-417.
[17] ZHANG K H, ZHANG L, YANG M H, et al. Fast Tracking via Spatio-Temporal Context Learning[J/OL]. [2016-10-25]. https://arxiv.org/pdf/1311.1939v1.pdf.
[18] 张旭东,吕言言,缪永伟,等.结合区域协方差分析的图像显著性检测.中国图象图形学报, 2016, 21(5): 605-615. (ZHANG X D, LÜ Y Y, MIAO Y W, et al. Image Saliency Detection Using Regional Covariance Analysis. Journal of Image and Graphics, 2016, 21(5): 605-615.)
[19] 钱琨,周慧鑫,秦翰林,等.基于引导滤波与时空上下文的红外弱小目标跟踪.光子学报, 2015, 44(9): 0910003-1-0910003-6. (QIAN K, ZHOU H X, QIN H L, et al. Infrared Dim-Small Target Tracking Based on Guide Filter and Spatio-Temporal Context Learning. Acta Photonica Sinica, 2015, 44(9): 0910003-1-0910003-6.)
[20] DANELLJAN M, HÄGER G, KHAN F S, et al. Accurate Scale Estimation for Robust Visual Tracking[C/OL]. [2016-10-25]. http://www.bmva.org/bmvc/2014/files/paper038.pdf.
[21] CHEN K, JIANG G Y, JHUN C G. Effect of Pole-Placement on Precision of Video Target Predicative Tracking Using Enhanced Full State Observer // Proc of the Chinese Control and Decision Conference. Washington, USA: IEEE, 2011: 13-18.
[22] 徐建强,陆耀.一种基于加权时空上下文的鲁棒视觉跟踪算法.自动化学报, 2015, 41(11): 1901-1912. (XU J Q, LU Y. Robust Visual Tracking via Weighted Spatio-Temporal Context Learning. Acta Automatica Sinica, 2015, 41(11): 1901-1912.)
[23] WU Y, LIM J, YANG M H. Online Object Tracking: A Benchmark // Proc of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2013: 2411-2418.